	This paper expresses opinions about many subjects related to
artificial intelligence.  The opinions are based on Minsky's great
experience as leader of one of the most successful artificial
intelligence laboratories and merit respect for that reason.
Nevertheless, it seems to me that many are not sufficiently
supported by arguments presented in the paper itself, and some of
them are not formulated clearly enough.  In this commentary
I will list some I disagree with, some I agree with (especially when I
have something to add), and some whose formulations I don't understand,
and I will try to give reasons for my opinions.  It is a measure of the
present weakness of the theory of intelligence that so many important
topics have to be treated in this way as matters of opinion.

	The two main sections of the paper are a history of the
progress of ideas in AI and a discussion of the representation
of knowledge.

	The events in the history seem correctly reported, but
I am somewhat doubtful about the picture of steady progress
presented in the historical section.  The problems solved so far have
been solved only in miniature versions and haven't yet led to a
convincing theory of the phenomena involved.  Therefore, I think the
same problems will have to be solved again until such theories
are developed.

	Unlike the hostile critics of AI, e.g. Dreyfus, Lighthill
and Weizenbaum, I regard this as normal for so difficult a
scientific problem and no reason to be discouraged.

Creativity

	Can a machine do more than its designer?  No!  A machine or
program can do nothing in a second its designer couldn't do in a year if
he weren't so lazy - except in a few specialized arithmetic areas where it
might take the programmer a hundred years to do a second's calculation.  I
personally feel the shortage of time and mental energy very acutely and
think that if the present state of AI permitted me to really multiply my
mental efforts by even a hundred, let alone ten million, I would
accomplish great things.

	Nevertheless, I think that there really is a worthwhile concept of
creativity and that creativity is not just those things we haven't yet
programmed.  In my view, creativity is the introduction of a new element
into a mental situation - where new means that the element is not
constructed in certain standard ways from what has already been
considered.

	My favorite example is proving that a checkerboard with the two
white corner squares removed cannot be tiled with dominoes.  The idea is
to count the numbers of black and white squares that must be covered.  I
consider this idea new and creative relative to the concepts that arise
directly from the formulation of the problem, even though the idea quickly
occurs to a mathematician reasonably experienced with combinatorial
problems.  Thus creativity is relative to a stock of ideas, and there is
such a thing as "easy creativity".  Progress in AI and in the psychology
of intelligence requires the scientific study of such easy creativity.
The beginning of such a study lies in formulating a mathematical logical
notion of what concepts are immediately available.
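
	To make the counting idea concrete, here is a minimal sketch, not
part of the original argument, assuming the usual coloring in which the
square at row r, column c is white when r + c is even and the two removed
white corners are the diagonally opposite squares (0, 0) and (7, 7):

        # A full board has 32 white and 32 black squares; removing the two
        # white corners leaves 30 white and 32 black.  Each domino covers
        # exactly one square of each color, so a complete tiling is impossible.
        def color(row, col):
            return "white" if (row + col) % 2 == 0 else "black"

        squares = {(r, c) for r in range(8) for c in range(8)} - {(0, 0), (7, 7)}
        white = sum(1 for s in squares if color(*s) == "white")
        black = len(squares) - white
        print(white, black, "tiling possible only if equal:", white == black)

Running this prints 30 against 32, and the color parity - an idea not
present in the bare formulation of the problem - is what rules out a
tiling.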

Science Fiction Scenarios

	I think Minsky underestimates the extent to which the writers'
need to produce a story has distorted their imagination about
the possibilities of AI and robots.  They need conflict situations
between approximately equal antagonists in which important issues are
seen to be at stake and in which individual effort makes a decisive
and clear cut difference.  It is also necessary for the author to withhold
an idea from his characters until the climax.  Often this is something
that would occur to most people years before the situation arises.

	In considering the "HAL scenario", one must distinguish
programs that give advice from those given operational powers.  One
should not give an ill-understood program operational powers in
an area where its actions could result in disaster.  Secondly, when
a program gives advice, one can distinguish between using it to
get ideas and following it trustingly.

	There are important problems in comparing the human motivational
structure with that of machines.  As far as I can see, a human doesn't
usually have an overall goal to which all subgoals remain subordinate.
A goal often arises as a kind of metaphor - it is analogous to something
that satisfied hunger or elicited personal approval, but then becomes
entirely independent of its causes and is pursued even in opposition
to them.  Secondly, the weight given to various accepted long-range
goals depends on physical state, e.g. when a person is tired, his ideas
of what goals to pursue in the next five years are altered.  Finally,
the human motivational structure evolved under quite different conditions
from those of civilization or even a tribal society.  As a result, it
has all kinds of peculiarities.

	The result of all these considerations is that it would be
both difficult and inappropriate to make computer programs with
motivational structures like those of humans.  Obedience coupled with
the requirement that the human user be given a full picture of the
consequences of all contemplated courses of action seems attainable
and appropriate.  As we understand more about intelligence, we shall
know more about appropriate control mechanisms for intelligent machines.
My point isn't so much to say that there will be no dangers or problems
but rather that they will be different from those imagined by the
science fiction writers.

Dear Marvin:

	Here is a draft of my comments on your paper for the Future
Study.  Besides those comments, there are a few rough edges that I
suppose you will want to iron out, so I haven't mentioned them in my
direct comments:

	1. It seems to me that you forgot the promise in the
first sentence to discuss "the problems and promises that arise from
the prospect of" AI.  Recounting the opinions of science fiction
writers seems inadequate.  You should either discuss the topic more or
change the focus of the introduction.